Very Large-scale Low-rank Approximation
Authors
Abstract
Low-rank approximation is commonly used to scale kernel-based algorithms to large-scale applications containing as many as several million instances. We introduce a new family of algorithms, ensemble Nyström algorithms, based on mixtures of Nyström approximations that yield more accurate kernel approximations than the standard Nyström method. We present extensive empirical results on data sets containing up to 1M points demonstrating the improvement over the standard Nyström approximation. Finally, we present a stability bound for kernel ridge regression based on the norm of the kernel approximation error, which helps determine the impact of the approximation on the generalization error.
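To make the construction concrete, below is a minimal NumPy sketch of a standard Nyström approximation and a uniformly weighted ensemble of such approximations. The uniform mixture weights are only one of several possible weighting schemes, and the sample size m, number of base approximations p, and toy RBF kernel are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def nystrom(K, m, rng):
    """One standard Nystrom approximation of the kernel matrix K
    from m uniformly sampled columns: K_tilde = C @ pinv(W) @ C.T."""
    n = K.shape[0]
    idx = rng.choice(n, size=m, replace=False)
    C = K[:, idx]        # n x m block of sampled columns
    W = C[idx, :]        # m x m intersection block
    return C @ np.linalg.pinv(W) @ C.T

def ensemble_nystrom(K, m, p, rng):
    """Uniformly weighted mixture of p independent Nystrom approximations."""
    return sum(nystrom(K, m, rng) for _ in range(p)) / p

# Toy usage on a small RBF kernel matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 10.0)
err_single = np.linalg.norm(K - nystrom(K, 50, rng), "fro")
err_ens = np.linalg.norm(K - ensemble_nystrom(K, 50, 5, rng), "fro")
print(err_single, err_ens)
```

In this sketch the ensemble typically reduces the Frobenius-norm reconstruction error relative to a single Nyström run, at the cost of p times the sampling and pseudoinverse work.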
Similar papers
Making Large-Scale Nyström Approximation Possible
The Nyström method is an efficient technique for the eigenvalue decomposition of large kernel matrices. However, in order to ensure an accurate approximation, a sufficiently large number of columns have to be sampled. On very large data sets, the SVD step on the resultant data submatrix will soon dominate the computations and become prohibitive. In this paper, we propose an accurate and scalabl...
Regularized Computation of Approximate Pseudoinverse of Large Matrices Using Low-Rank Tensor Train Decompositions
We propose a new method for low-rank approximation of Moore-Penrose pseudoinverses (MPPs) of large-scale matrices using tensor networks. The computed pseudoinverses can be useful for solving or preconditioning large-scale overdetermined or underdetermined systems of linear equations. The computation is performed efficiently and stably based on the modified alternating least squares (MALS) schem...
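The tensor-train machinery itself is beyond a short snippet, but the object being approximated can be stated in a few lines. Below is a hedged dense sketch of a Tikhonov-regularized rank-k pseudoinverse built from a truncated SVD; the cited work computes this kind of operator in tensor-train format via MALS precisely to avoid the dense SVD used here.

```python
import numpy as np

def regularized_pinv(A, k, lam=1e-6):
    """Rank-k Tikhonov-regularized pseudoinverse via a truncated SVD.
    (Illustrates the target object only; the paper computes it with
    tensor-train / MALS machinery instead of a dense SVD.)"""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]
    # Filtered inverse singular values: s / (s^2 + lam) -> 1/s as lam -> 0.
    d = s_k / (s_k ** 2 + lam)
    return Vt_k.T @ (d[:, None] * U_k.T)

# Use as an approximate solver for an overdetermined system.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
b = rng.standard_normal(200)
x = regularized_pinv(A, k=50) @ b  # close to the lstsq solution for small lam
print(np.linalg.norm(A @ x - b))
```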
Large-Scale Nyström Kernel Matrix Approximation Using Randomized SVD
The Nyström method is an efficient technique for the eigenvalue decomposition of large kernel matrices. However, to ensure an accurate approximation, a sufficient number of columns have to be sampled. On very large data sets, the singular value decomposition (SVD) step on the resultant data submatrix can quickly dominate the computations and become prohibitive. In this paper, we propose an accu...
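For reference, a generic randomized SVD in the Halko–Martinsson–Tropp style is sketched below; the paper's exact variant may differ, and the oversampling and power-iteration parameters here are common defaults rather than its settings.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, n_iter=2, rng=None):
    """Approximate rank-k SVD via a randomized range finder, with a few
    power iterations to sharpen accuracy for slowly decaying spectra."""
    if rng is None:
        rng = np.random.default_rng()
    n = A.shape[1]
    Omega = rng.standard_normal((n, k + oversample))
    Y = A @ Omega
    for _ in range(n_iter):               # power iterations
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                # orthonormal basis for the range
    B = Q.T @ A                           # small (k + oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

# Swap this in for the exact SVD of the m x m Nystrom submatrix when m is large.
rng = np.random.default_rng(2)
W = rng.standard_normal((1000, 30)); W = W @ W.T  # low-rank PSD test matrix
U, s, Vt = randomized_svd(W, k=30, rng=rng)
print(np.linalg.norm(W - (U * s) @ Vt) / np.linalg.norm(W))
```

The cost is dominated by a few passes over the matrix plus an SVD of a small (k + oversample) × n matrix, which is what keeps the Nyström SVD step tractable when many columns are sampled.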
Spectral Regularization Algorithms for Learning Large Incomplete Matrices
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm, Soft-Impute, iteratively replaces the missing elements with those obtai...
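A minimal NumPy rendering of the Soft-Impute iteration as described above: impute the missing entries with the current low-rank estimate, then soft-threshold the singular values. The fixed iteration count, the regularization level lam, and the toy rank-3 problem are illustrative assumptions, not the paper's tuned procedure, which also exploits sparse-plus-low-rank structure for scalability.

```python
import numpy as np

def soft_impute(X, mask, lam, n_iter=100):
    """Soft-Impute: fill missing entries with the current low-rank
    estimate, then soft-threshold the singular values (nuclear-norm
    regularization). mask is True where X is observed."""
    Z = np.zeros_like(X)
    for _ in range(n_iter):
        filled = np.where(mask, X, Z)     # observed data + current guess
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)      # soft-threshold the spectrum
        Z = (U * s) @ Vt
    return Z

# Toy completion of a rank-3 matrix with half the entries missing.
rng = np.random.default_rng(3)
M = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 80))
mask = rng.random(M.shape) < 0.5
Z = soft_impute(M, mask, lam=1.0)
print(np.linalg.norm((Z - M)[~mask]) / np.linalg.norm(M[~mask]))
```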
Literature survey on low rank approximation of matrices
Low rank approximation of matrices has been well studied in the literature. Singular value decomposition (SVD), QR decomposition with column pivoting, rank-revealing QR factorization (RRQR), and interpolative decomposition are classical deterministic algorithms for low rank approximation, but these techniques are very expensive (O(n³) operations are required for n × n matrices). There are several rando...